

.macro bs	⊂ begin indent 2,5; preface 0;  ⊃

.S Basic Research in Artificial Intelligence and Formal Reasoning

//pers John#McCarthy, Lewis#Creary, Jon#Doyle,
Richard#Gabriel, Jussi#Ketonen, Carolyn#Talcott, 
sra Jitendra#Malik, Joe#Weening


Applied research requires basic research to replenish the stock of ideas
on which its progress depends.

The long range goals of our work in basic AI and formal reasoning
are to make computers carry out the
reasoning required to solve problems.
This has involved studying the facts of the common sense world
and appropriate methods of common sense reasoning - both rigorous and
conjectural.  These facts and modes of reasoning constitute the
%2epistemological%1 (knowledge theoretic) aspect of the artificial
intelligence problem.  We have studied it apart from %2heuristic%1 (search
and pattern matching) problems as much as possible.

The need to study epistemological problems of AI apart from heuristics
has recently been recognized by many AI researchers.  We can cite Allen
Newell's as yet unpublished presidential address to the AAAI
on the "logical level" of programs, as well as work
by Drew McDermott at Yale and Robert Moore at SRI.

Research in formalizing the facts of the common sense world for AI purposes
began with McCarthy's 1959 "Programs with Common Sense".  The much-used
situation calculus was proposed in [McCarthy and Hayes 1969].
In recent years attention has turned to formalizing facts about knowledge,
belief and wants.  The problem of formalizing partial information about
concurrent processes has long been a major barrier to progress in AI.

The work on rigorous reasoning has led to interactive proof generating
and proving programs including Weyhrauch's FOL and the newer EKL of
Ketonen.  Continued advances make it easier and easier to prepare
machine-aided and
machine-checkable proofs of mathematical results in general and the
correctness of computer programs in particular.

The work on conjectural reasoning has led to %2circumscription%1, a form
of non-monotonic reasoning described in [McCarthy 1980a].
Other forms of non-monotonic reasoning have been developed.
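
To indicate the flavor of the method, the circumscription of a predicate
P in a sentence A(P) can be written as a first order schema; the
following is a minimal sketch in modern notation, not a quotation from
[McCarthy 1980a].
.begin nofill

% Circumscription schema: for every predicate expression \Phi,
A(\Phi) \land \forall \bar{x}\,(\Phi(\bar{x}) \supset P(\bar{x}))
        \supset \forall \bar{x}\,(P(\bar{x}) \supset \Phi(\bar{x}))
% Informally: P holds of no more objects than the axiom A forces it to,
% which licenses conjectures that may be retracted when A is strengthened.
.end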

Indeed the epistemological studies have reached a point where we can
propose to make an experimental Advice Taker to implement the "intelligence"
ANALYST mentioned in our 1979 proposal as a potential long range
application.

Until now the main work of the Formal Reasoning Group has been
theoretical.  We plan to continue the theoretical work, but it
has advanced to the point where a major experimental program aimed at
eventual applications is possible.  We have decided to implement
a version of the Advice Taker proposed by McCarthy in 1958 [McCarthy
1959] concentrating on the "intelligence" ANALYST mentioned in our
previous proposal.

The advances that make ANALYST a feasible project include results of
McCarthy and Creary on formalizing mental qualities and results of
McCarthy, Doyle and others on non-monotonic reasoning.  Besides our
own results, we rely on work by others including Gabriel on conjectural
reasoning based on comparative description matching.

.SS Summary of Accomplishments and Work Completed

John McCarthy 
has continued his work on epistemological problems
of artificial intelligence, especially non-monotonic reasoning
and the representation of knowledge and belief in first order logic.
[McCarthy 1980] summarizes the state of the work on non-monotonic
reasoning as of Winter 1980, and [McCarthy 1979] summarizes the 1979
results on representation of concepts and propositions.
Beyond these results, McCarthy has developed a more general
form of circumscription suitable for automatically generating frame
axioms.

	He has also continued his work on the Elephant formalism
for representing programs.  This permits entirely conventional
programs to be regarded as sentences of logic from which their
properties follow without any special logic of programming.
This involves representing variables including the program counter
explicitly as functions of time and has led to controversy with
the advocates of temporal logic.  Elephant also allows explicit
reference to past events and actions without having to declare
variables or arrays to represent the information that has to
be remembered by the program.  This makes it a higher level language
than almost all present languages, but it is not yet clear how
to take best advantage of this fact.  Results on Elephant will
be published shortly [McCarthy 1981].
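
To suggest how this works, here is a sketch (ours, not drawn from the
forthcoming paper) of how one conventional assignment statement becomes
a sentence of logic.
.begin nofill

% An assignment x := x + 1 at program counter value k becomes a
% sentence about x and pc as explicit functions of time:
\forall t\,(pc(t) = k \supset x(t+1) = x(t) + 1 \land pc(t+1) = k + 1)
% Reference to the past then needs no declared history variable, e.g.
% "x once exceeded 100" is simply  \exists t'\,(t' < t \land x(t') > 100).
.end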


Luigia Aiello did extensive
work demonstrating how metatheory can be used to represent elementary 
algebra in a reasonable way.  She wrote an algebraic simplifier
that used meta reasoning to make decisions about which simplifications
should be used [Aiello and Weyhrauch 1980].
She also wrote a `compiler' for converting
effective axiomatic descriptions of functions (as part of a theory)
into programs (the interpretation in the intended model) [Aiello 1980].
The work was carried out using FOL [Weyhrauch 1977,1979].  
She used and extended technology developed by 
[Talcott and Weyhrauch 1981] for exploiting the theory-metatheory
and theory-model connections that are basic to the FOL formalism.


Martin Brooks
completed his thesis, which describes how testing can be used to show a
program's correctness [Brooks 1980].
The scenario for using testing to show correctness is that the programmer
writes a program P which is either correct, or which differs
from the correct program by one or more errors in a specified class E of errors.
The theory relates:
.begin indent 0,4; nojust; preface 0;
(1) P's structure.

(2) The combinatorics of how errors in E could have led to P (incorrectly)
being written.

(3) The type of program behavior that the programmer will observe when
testing, for example the program's output, or a trace.

(4) The characteristics of "complete" test data, which is guaranteed to reveal
the presence of errors in E, assuming all errors come from E.
.end

The theory is developed at a general level and is then specialized
to the case of recursive programs having only simple errors, where the
programmer observes a trace of each test run.
A method is devised for generating
test data, based on this application of the theory.
The generated test data is "complete",
meaning that if the only errors in the recursive program are of the specified types,
then the test data will
reveal the errors through at least one incorrect trace.

Viewed contrapositively: assuming that the written program P is "almost
correct", that is, differs from the correct program at most by errors of
the specified simple types E, if the trace of each test run is as the
programmer intended, then the program must be correct.
The test data generation method has been implemented (in MacLisp) and examples
of its operation are given.
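
As a hypothetical illustration of the kind of error class E involved
(our example, not one from the thesis), consider a recursive program
whose only error is a wrong constant in its base case.
.begin nofill

;; Intended: (LEN X) returns the length of the list X.
;; A simple error of a typical class E: wrong constant in the base case.
(DEFUN LEN (X)
  (COND ((NULL X) 1)               ; error: should be 0
        (T (ADD1 (LEN (CDR X))))))
;; Complete test data for this class must exercise the base clause,
;; e.g. (LEN NIL); its trace returns 1 where the programmer intended 0,
;; so at least one trace reveals the error.
.end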

Lewis Creary's research during the past year has concentrated on
epistemological and representational problems of commonsense reasoning,
and of causal reasoning in particular.  He has constructed a general
epistemological framework for the study of commonsense factual reasoning that
is specifically oriented to the needs of research in artificial intelligence.
The framework, which is presented in [Creary 1981b], incorporates a "competing
considerations" model of factual reasoning, according to which such reasoning
involves the weighing and composition of all reasonably discoverable
considerations bearing on a set of target propositions.  While the work on
causal reasoning is not yet finished, it has yielded as a byproduct a
philosophical paper [Creary 1981a] on the need to take forces and related
causal influences seriously in the analysis of causal explanation.


Chris Goad has completed a thesis concerning the manipulation of
algorithms expressed by formal proofs [Goad 1980a,b].
A brief description of his thesis work follows.

As has been known since the 1940's, a proof of a proposition of the form
`for all x there exists some y such that the relation R holds between x
and y' can, under the right conditions, serve as a `program' for computing
values for y from values for x.  Such a proof describes a method of
computation - an algorithm - in a way which is rather different from the
way in which the same algorithm would be described by a conventional
program.  For one thing, a proof contains information about an algorithm
which is not present in the corresponding program.  Goad's thesis showed
that the differences between proofs and ordinary programs can be exploited
in the automatic specialization and optimization of algorithms.  Goad
worked out one non-trivial and fully concrete application, namely, the use
of proof transformations familiar from proof theory in the automatic
specialization of a bin-packing algorithm which was formally described by
a proof.  The application was carried out by use of a proof manipulation
system developed by Goad on the Stanford Artificial Intelligence
Laboratory PDP-10 computer.  The application was successful in the sense
that substantial gains in efficiency were produced by proof
transformations which have no analogs among the transformations applicable
to ordinary programs.
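
In outline, the underlying correspondence is the standard
proof-theoretic one; the following sketch is ours, not Goad's notation.
.begin nofill

% A constructive proof of
\forall x\,\exists y\, R(x, y)
% yields by normalization a function f together with a proof of
\forall x\, R(x, f(x))
% Specializing the proof to a restricted class of inputs and
% re-normalizing can yield a faster algorithm for that class - a
% transformation with no analog on the program text alone.
.end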

Ben Moszkowski 
has been investigating how to model digital circuits 
using temporal logic and denotational semantics.  
The goal is to describe and reason, naturally yet rigorously,
about signal propagation, asynchronous operation,
feedback and general input/output aspects of digital components.
His methodology provides the means for proving proper behavior 
in actual circuits in a manner similar to that for higher level
program verification.  
The notation provides a complement to tabular analysis and computer
simulation for manipulating and checking digital devices and has the
advantage that it corresponds closely with the intuitive notions of
circuit principles.  In addition, it readily permits incomplete
specifications of devices and allows for formal proofs.  The formalism
naturally includes the ability to hierarchically structure behavior and
proofs so that the internal details of a component can be ignored once
desired properties have been substantiated.
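
For flavor, a unit-delay element might be specified along the following
lines; this sketch is ours and simpler than Moszkowski's actual notation.
.begin nofill

% A unit-delay element with input in and output out:
\forall t\,(out(t+1) = in(t))
% Composition with feedback is reasoning from the conjunction of the
% component specifications; an incomplete specification simply leaves
% some signal values unconstrained, yet still supports formal proof.
.end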

Carolyn Talcott 
has completed the main work on the FOLISP project.  
This work involved expressing the ideas of [McCarthy and Cartwright 1979] 
for the representation of LISP programs in first order logic
in the FOL formalism [Weyhrauch 1977].
Proofs of a variety of properties of LISP programs were 
written and checked by FOL.  They included all the examples
discussed in "LISP: Programming and Proving" 
[McCarthy and Talcott 1981, Chapters III and IV].

Among the techniques illustrated are (a small example follows this list):
.bs
1. representation of recursive programs,

2. use of structural and rank induction to prove properties
of total recursive functions,

3. use of McCarthy's minimization schema to prove properties
of partial recursive functions,

4. representation of iterative programs using the Elephant
formalism [McCarthy 1980], and

5. use of the method of intermittent assertions 
[Manna and Waldinger 1978] to prove properties of 
iterative programs represented in the Elephant formalism.
.end
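
As a small example of items 1 and 2 (ours, and simpler than the checked
proofs), consider the append function and a property proved about it by
structural induction.
.begin nofill

;; The recursive program represented by a first order sentence about
;; the function app:
(DEFUN APP (X Y)
  (COND ((NULL X) Y)
        (T (CONS (CAR X) (APP (CDR X) Y)))))
;; A typical property checked by structural induction on X:
;;   (ALL X Y Z) app(app(X, Y), Z) = app(X, app(Y, Z))
.end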

In addition, the FOLISP system in FOL was developed sufficiently
to be usable by students in the advanced theory of computation
course for representing and checking their own proofs.
A description of this work is contained in [Talcott 1981].

Ma Xiwen
revised McCarthy's set of axioms for representing the
rather difficult knowledge puzzle of Mr. S and Mr. P and used
FOL to verify the translation of the knowledge problem to
a purely arithmetic problem.
This work is part of a study of the representation of facts about
knowledge that will aid the development of programs that
know what people and other programs do and don't know.

The following students completed their thesis work:
Juan Bulnes [Bulnes-Rozas 1979]
and David Wilkins [Wilkins 1979].


.SS Some Unsolved Representation Problems


Although much of our work is technical and abstract,  the results will have
important practical applications.  Also, consideration of solutions to
practical problems helps to isolate and clarify some of the technical issues.
For example, consider the problem of designing and effectively using data bases.
For data bases to include many types of information that decision makers
really need will require major advances in representation theory.  Programs
that use the information effectively impose requirements on the
representation and also need new modes of reasoning.  Thus current data
base technology at best allows simple relations to be represented - e.g.
"Smith is the supervisor of Jones."  Additions from current AI techniques
would allow simple generalizations of relations ("Every employee has a
supervisor except the director."), but this leaves a tremendous range of
representation problems untreated:
.skip 1
.bs
1. Mental states - what a person or group believes, knows, wants, fears, etc.

2. Modalities - what may happen, what must happen, what ought to be done, what
      can be done, etc.

3. Conjectures - if something were true what else would be the case.

4. Causality - how one event follows because of another; the
preconditions of events and the consequences of events;
concurrent events and their laws of interaction and non-interaction.

5. Actions and their modifiers, e.g. "slowly".  

6. Ability - conditions under which a person or group can do something.

7. Obligation or owing.
.end

Facts of these kinds cannot be adequately represented in data bases at
present, and there are undoubtedly other phenomena essential for
intelligence which have yet to be discovered.  Before such facts can be
incorporated in data bases and question-answering programs in a general
way, basic research must determine the logical structure of these
concepts.

Before discussing our planned work on these epistemological problems
technically, we discuss an experimental application that we are now in a
position to undertake.

.SS The ANALYST

ANALYST scans a computerized "intelligence" data base.
It looks for
patterns of information that suggest conjectures about an adversary's
capabilities, intentions, knowledge, beliefs and goals.  When ANALYST
thinks it has found something significant, it informs a human analyst
about its conclusions (and if requested about how it reached them).  Its
forte is examining more data than is possible for a human and guaranteeing
that all possibilities of the kinds it is programmed for are pursued.  The
example that follows is somewhat contrived, because we have emphasized the
problems we are currently studying and ignored others.  Moreover, the
ANALYST we plan to program will be quite
limited in its capabilities, partly because we have only partially solved
the problems we are studying, but mainly because of problems no one has
begun to study.

Suppose ANALYST reads in a report from Damascus:
.begin indent 2,2

%2Major Alexei Ivanov went to Damascus airport, bought a ticket to Moscow
for cash, and departed on the next flight to Moscow%1.
.end

ANALYST asks itself why Ivanov did what he did.  Usually it finds
hypotheses that fit a normal pattern of behavior and gains nothing beyond
a sharper definition of the normal pattern.  Let's suppose there is more
in this case.

%2Why did he pay cash%1?  This question arises from a program that looks
for motives of actions that are found to deviate
from a normal pattern.  We suppose that the data base
contains a statement that Russians usually buy tickets from their travel
agency Intourist.  The conjecture is then formed that Ivanov is in a hurry
and that some event has required him to go to Moscow suddenly.  The
hypothesis is confirmed by discovering that he had previously accepted an
invitation incompatible with a Moscow trip.

Now suppose that it is known or conjectured that Ivanov is a radar expert.
This leads ANALYST to scan facts about our adversary relationship with the
Russians in the field of radar including the fact that we are trying to
find out the pattern of frequency variation of their radars.  One general
fact in the data base is that when one side finds out the frequency
variation pattern of a radar, the other side wants to change it.
%2Therefore, ANALYST conjectures that the Russians think we will soon know
the frequency variation pattern of their R111 radar%*.

This simple example poses several problems not handled by present database
programs.  First, present data base systems store particular facts, not
general facts.  General facts usually have to be built into programs or at
least into productions.  Second, the laws that determine what conclusions
can be drawn from facts about knowledge, belief and intentions are
different from those governing non-mental qualities, and present programs
don't use the laws that apply to these %2modal%1 concepts.  Third, the
reasoning required even to conjecture that Ivanov is in a hurry involves
conjecturing that his behavior is normal except in so far as specific
facts suggest abnormalities.  Fourth, many reasoning processes involve
observation as well as reasoning from facts in a data base.  Obtaining the
confirming evidence that Ivanov is in a hurry has that character.
Finally, the pattern recognition required to conjecture from Ivanov's
hurried Moscow trip and the previously known facts that they think we will
soon know their radar pattern is quite different from that done today.  We
shall consider these problems in turn.

.CB Representing general facts

Few existing data base systems represent general facts by entries in the
data base.  For example, ANALYST needs to represent the fact that the
Russians almost always buy their airline tickets from Intourist in such a
way that further deductions can be made and the fact can be modified by
new evidence.  Existing systems represent them by program or by
%2semi-programs%1 like productions.  This works very well for applying the
general facts to particular cases, but it doesn't work as well for deducing
new general facts from old ones or representing facts about facts.  In
order to represent general assertions, e.g. Russians buy their tickets
from Intourist, one needs quantifiers, and the most developed logical
system with quantifiers is first order logic.  Even within first order
logic, there are many possible ways of representing a particular kind of
fact, and much further study is required.
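
For example, the Intourist fact might be written as a first order
sentence with the exceptions gathered into an abnormality predicate;
the choice of predicates here is ours, for illustration only.
.begin nofill

% "Russians almost always buy their airline tickets from Intourist":
\forall p\,(russian(p) \land buysticket(p) \land \lnot ab(p)
            \supset buysfrom(p, Intourist))
% Minimizing ab (e.g. by circumscription) yields the normal case as a
% conjecture while leaving the sentence modifiable by new evidence.
.end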

.CB Knowledge and belief

The notion %2X thinks Y will soon know Z%1 is not unusually complex when
adversaries try to outwit each other.  It presents problems for machine
representation that haven't been conclusively solved but on which we have
made recent progress.

In order to be useful ANALYST must do more than just represent the above
fact.  It must be able to prove or conjecture it under appropriate
circumstances and it must be able to draw correct conclusions from it -
and not draw incorrect conclusions.  The latter is the more immediate
problem.  Let us use a simpler example.  Suppose we have the sentences
%2Pat knows Mike's telephone number%1 and %2Mike's telephone number is the
same as Mary's%1.  A computerized deduction system that uses the rule that
equals may be substituted for equals might conclude %2Pat knows Mary's
telephone number%1.  This is not a legitimate deduction, even though it
would be legitimate to deduce that Pat dialed Mary's telephone number from
the fact that he dialed Mike's number and the fact that the numbers are
the same.
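
The notation of [McCarthy 1979] resolves the puzzle by distinguishing a
number from the concept of that number; in sketch (our rendering):
.begin nofill

% telephone(mike) is Mike's number itself; Telephone(Mike) is the
% concept "Mike's telephone number".  The premises become
knows(pat, Telephone(Mike))
telephone(mike) = telephone(mary)
% Substitution inside knows would require equality of the concepts,
% Telephone(Mike) = Telephone(Mary), which does not follow; but
% dialed(pat, telephone(mike)) does license dialed(pat, telephone(mary)).
.end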

The fact that substitution of equals for equals is legitimate in some
contexts and not in others has been well known for a very long time.
Correct logical laws for handling such cases have been proposed, but the
presently known solutions have two defects.  First, they usually treat only
one such concept at a time, such as "knows", while real life problems often
mix up several even in the same sentence, e.g. think and know - %2They
think we will soon know ...%1.  Second, each such mental quality requires
modifying the logic.  In a practical case this would be many months' work
and might have to be done again and again.

.CB Conjectures

It has long been recognized that standard logic does not represent the
many kinds of reasoning that people use in forming conjectures.  It now
appears that much human reasoning involves conjecturing that the known
facts about a phenomenon are all the relevant facts.  Thus ANALYST must
conjecture that Ivanov was in a hurry because it has no other explanation
even though it cannot conclusively exclude other explanations.

Strict logical deduction does not permit drawing a conclusion from certain
facts that would be changed if additional facts, supplementing but not
contradicting them, were discovered.  In logic, if a conclusion follows,
it will still follow when more facts are added.  Humans, on the other
hand, are always drawing this kind of conclusion.  We now think that
machines must also reason this way, and that programs confined to strict
logical reasoning must either be unable to draw conclusions or they must
use axioms so unqualified that they are false.

There are two parts to the problem of drawing such tentative conclusions.
The first part is that of deciding what assumptions to make to facilitate
one's reasoning, and the second part is that of actually making those
assumptions.  The first of these problems is difficult in novel situations,
for one just knows one lacks information, without being sure exactly what
one should know.  However, in routine situations, one has already considered
or been told what are the reasonable assumptions to make, and these can
simply be stated as general, rule-of-thumb facts, that is, rules which
tolerate exceptions.  Partial solutions to both of these problems have
been found: to the first, a technique called circumscription [McCarthy 1980a],
and to the second, techniques called default rules and reason maintenance.

The problem of making routine assumptions has been approached in many
ways in artificial intelligence programs, perhaps the most common being
the technique of default fillers of frame slots.  These techniques all
had serious problems, though, as they did not interpret the statements of
these defaults carefully enough to ensure sensible behavior.  Recently
Jon Doyle proposed reason maintenance systems as a first solution to this
problem [Doyle 1979, 1980].  Reason maintenance systems record the
inferences made by a reasoner, and examine the set of reasons to decide
on a coherent set of assumptions to adopt.  Routine assumptions appear
in this framework as default rules, such as "All birds fly," which are
used to construct non-monotonic justifications, records of non-monotonic
inferences.  In addition to their importance in making assumptions,
reason maintenance systems are instrumental in many other necessary functions
of reasoning programs, including database updates, explanation of database
entries, and, as explained next, decision-making.
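
The flavor of such a record can be suggested by a sketch in the style of
[Doyle 1979]; the syntax below is hypothetical, not that of an actual
reason maintenance system.
.begin nofill

;; A non-monotonic justification: FLIES-TWEETY stays in as long as
;; BIRD-TWEETY is in and AB-TWEETY (an abnormality node) is out.
(JUSTIFY 'FLIES-TWEETY
         '(BIRD-TWEETY)     ; in-list: must currently be believed
         '(AB-TWEETY))      ; out-list: must currently be disbelieved
;; Later justifying AB-TWEETY (Tweety is a penguin, say) retracts
;; FLIES-TWEETY while keeping the record of why it was once believed.
.end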

.cb Decision-Making

A key problem for any agent carrying out complex activities or solving
difficult problems is that of making decisions.  For example, ANALYST must
make decisions about which possible interpretation to take of each new
fact learned, and must also make further decisions about how to fit these
individual decisions into global interpretations of what is going on.
Subjective Bayesian decision theory is perhaps the best developed formal
theory of making decisions, but proves overly restrictive in its requirement
of complete information about actions, consequences, and utilities.  Recently
Jon Doyle has developed a new technique for decision-making called reasoned
deliberation [Doyle 1980].  Reasoned deliberation views the decision-making
process as a reasoning task itself.  It involves incremental construction,
by means of general rules of thumb about decision-making, of the various
possible actions, their expected consequences, and the partial preferences
and utilities (when known) among them.  It applies these general rules in
a pattern of dialectical argumentation, in which one rule will construct
a reason for acting on one option, a second rule will challenge that reason,
a third rule will challenge the challenging reason, and so on.  
For example, ANALYST might counter the proposed interpretation of Ivanov's
return to Moscow with an observation that the local police arrested a burglar
who could implicate Ivanov in several crimes.  However, ANALYST might go
further and counter this new interpretation with the general fact that
Ivanov is meticulous about getting blackmail information about the local
police chief wherever he goes.  But this might in turn be countered by
an observation that the previous police chief just retired, and that there
hasn't been enough time for Ivanov to investigate the new chief.
These general
rules are important in succinctly specifying special cases and exceptions to
other general rules.  This technique makes important use of the maintenance
and explanation facilities provided by reason maintenance techniques.
We hope to combine these ideas with those of Gabriel on comparative matching
of descriptions to yield a powerful conjectural (heuristic) component that 
interacts cleanly with a formal, deductive mechanism.


.CB Patterns

Many of the patterns ANALYST will have to recognize do not fall into the
categories so far treated in AI work.  For example, explaining an unknown
activity of an adversary requires conjecturing a goal and its relation to
other goals, a belief structure that makes the goal seem desirable and
attainable, and a means of attaining the goal that gives rise to the
observations.  Present AI pattern recognition programs find patterns in
observed data rather than introduce new entities in order to explain the
data.

.SS Proposed Work

Now we present in more detail the technical aspects of our planned work.

.sss Formal Reasoning and Basic Artificial Intelligence

John McCarthy will direct the work on formal reasoning and basic
artificial intelligence and will continue his own research in the
epistemological problems of artificial intelligence and the theory of
program verification.  Besides this he will take part in the specification
and implementation of ANALYST.

McCarthy's recent research efforts have mainly involved extending the
circumscription method of non-monotonic reasoning described in [McCarthy
1980].  Formal methods of non-monotonic reasoning are a development of the
last five years.

There is a good prospect for breakthroughs on the long-standing
"qualification problem" in the next few years using circumscription and
various generalizations.  Two ideas are involved.

First, frame axioms are derivable from a slight generalization of the
circumscription method of [McCarthy 1980].  The generalized
circumscription gives a sentence schema stating that certain functions and
predicates minimize the truth value of a certain sentence while holding
other sentences true.  It is a logical analog of the mathematical method
of minimization subject to constraints in the calculus of variations,
although it is not yet apparent that the analogy can be exploited.  It
allows us to specify that the only entities that change when an event
takes place are those that the description of the event and associated
facts force to change. Circumscription permits this to be done without an
initial commitment as to what these facts will be.

For example, we can tell ANALYST that the event of a person moving from
one city to another leaves fixed everyone whom it doesn't say moves.  The
fact that a person's family moves with him is then an add-on rather than a
modification.  This meets the requirement that the person or computer
program adding the fact that when a person moves his family also moves
need not understand the initial axiom giving the effects of moving.  It is
a key property of intelligent systems that they can be advised and their
behavior modified by someone who does not understand the details of how
they work.  It will be especially important when knowledge representation
and problem solving systems like ANALYST contain many tens of thousands of
facts.
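
Schematically, the idea looks as follows; this rendering, including the
predicate names, is ours and not the schema of the paper in preparation.
.begin nofill

% The event axiom states only what the event forces to change:
\forall p\, c\, s\;\ location(p, result(move(p, c), s)) = c
% Generalized circumscription supplies, in the absence of other facts,
% frame axioms such as
\forall q\, p\, c\, s\;(q \neq p \supset
     location(q, result(move(p, c), s)) = location(q, s))
% and the add-on fact that a person's family moves with him enlarges
% what is forced to change without editing the original axiom.
.end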

The second new application of circumscription is more difficult but
probably more far-reaching.  Suppose we need to tell ANALYST that the
Russians sometimes attempt to bribe foreign officials to reveal secrets
and that they can be embarrassed when such attempts are revealed.  This
seems straightforward and a human would have no difficulty absorbing the
information.  However, for any present knowledge representation system to
accept the information, a number of very pedantic issues would have to be
resolved: (1) %2Does attempting to bribe an official include offering a
bribe to someone who is mistakenly believed to be an official?  (2) Does
it include offering a bribe to someone who is not believed to be an
official but turns out to be?  (3) Must there be an individual who is
offered the bribe, or can letting it be known that a bribe is available
count as an attempt to bribe an official%1?  Intuitively, all these issues
seem to be
irrelevant.  They probably won't come up, and if they did, they could be
resolved in an ad hoc way.  However, present knowledge representation
systems require definitions applicable to all cases.

Circumscription and other forms of non-monotonic reasoning apparently
allow us to make systems that may be characterized as "ambiguity tolerant"
- something that Hubert Dreyfus said was impossible for computers.  The
idea, not yet well worked out, is to be able to say that a sentence is
unambiguous unless two interpretations are proposable.  This will allow an
intelligence specialist to tell ANALYST about propensities to attempt to
bribe without being aware that ambiguities are possible, and it will allow
ANALYST to use the concept without problem until and unless ambiguous
examples actually arise.

McCarthy will also continue his research on the Elephant formalism for
representing sequential programs in first order logic.

.cb Goals and Milestones

⊗ December 1981: Completion of paper on Elephant.

⊗ June 1982: Completion of paper on use of circumscription to
get "ambiguity tolerance".

.sss Representations and Reasoning: Theoretical Foundations

Lewis Creary is continuing his work on the epistemology of artificial
intelligence and the representation of knowledge, with the current overall
goal being twofold:
.begin indent 0,3;

1. to formalize the main concepts involved in the planning of actions, and to
provide a sound truth-theoretic semantics for the resulting representations
[Creary 1981d].  This will include treatments of:
.end
.begin indent 3,6;

a. actions, events, situations, properties, and temporal relationships

b. counterfactual planning conditionals

c. causal relations among actions, events, and causal influences (i.e.,
generalized forces).

d. propositional attitudes (largely complete; see [Creary 1979])

.end
.begin indent 0,3;
2. to provide a sound epistemological basis for commonsense reasoning
involving the above notions in general, and causal reasoning in particular.
[Creary 1981b] is a first step in this direction (the ideas involved are
briefly discussed in point 4 of the following subsection).  Applications and
extensions of the framework presented in that paper will include:
.end
.begin indent 3,6;

a. extension of the general theory of "competing considerations" to cover
various kinds of non-factual reasoning

b. specialization of the general theory to deal with causal reasoning, the
counterfactually conditional conclusions involved in AI planning applications,
and the frame and qualification problems [Creary 1981c].

c. applications involving detailed investigation of particular problems, for
which specific sets of epistemic statuses and consideration types will be
developed.  The reasoning involved will deal with the planning of
communications and travel problems, and will involve both causal reasoning,
and the sort of simulative reasoning about the propositional attitudes of
others that is discussed in point 6 of the following subsection.  These
investigations will be conducted in conjunction with work on the advice-taking
ANALYST program (see next subsection), which will supply the detailed
stimulation and feedback needed.

.end

Almost all of the theoretical developments just discussed will be directly
applied in the new advice-taking ANALYST program (see next subsection), on
which Creary is also working.
.sss An Advice-Taking ANALYST Program

Recent progress by McCarthy and Lewis Creary in formalizing knowledge about
propositional attitudes, by McCarthy in formalizing the new method of
circumscription, and by Creary in explicating the structure of commonsense
factual reasoning, suggests that problem solvers using the new ideas may be
able to treat naturally some problems of the sort that ANALYST (see section
1.3) might face -- problems that are inaccessible to or awkward for previous
general problem solvers.  These problems include those whose solution requires
non-trivial reasoning concerning the beliefs and desires of other organisms,
and those requiring the making of reasonable, but fallible, conjectures.
Checking this requires experiment.

Creary's earlier design studies toward an advice-taking problem solver that
implements these theoretical developments have recently been found to be
compatible with and complementary to some ideas and associated software
developed by Richard Gabriel as part of his doctoral research on the
organization of very large intelligent programs.  Since Gabriel has recently
joined the formal reasoning group as a collaborator on this project, it now
appears that the most efficient route to an initial implementation is to
rework, for our purposes, some of his existing domain-independent
problem-solving software.

.cb Goal

⊗ We propose to construct an advanced, advice-taking problem solver for
analyzing interesting kinds of "intelligence" information -- in short, a first
incarnation of ANALYST.  This program will attempt to infer concealed facts
from visible facts and reasonable conjectures that it has made.  It will deal
with information about beliefs, desires, purposes, preferences, aversions, and
fears.  It will both ascribe purposes to actions and use ascribed purposes
to predict actions.  It will sometimes infer the existence of previously
unknown hidden entities from their observed effects.

An important skill of a human ANALYST, which we hope to model, is the
ability to infer the goals and plans of other people from the facts of their
actions. 
At this point there are several distinctions that need to be made with
respect to the inference or recognition problem.
A %2plan%1 is more than simply an orderly arrangement of actions; it is
an orderly arrangement plus an association of the actions to goals and
subgoals of the actor (the person making the plan). To infer a plan as an
orderly arrangement of known actions is simple: it is simply the 
orderly arrangement of those actions. The interesting cases are
where the goals are not all known and/or not all of the actions
are known. In the case that the goals are known but not all of the
actions known is called %2plan recognition%1. In the case that not all
of the goals are known is called %2goal-plan recognition%1.

Goal-plan recognition is quite formidable, since it subsumes
the plan production problem.
Furthermore, in the ANALYST problem there is the need
to deduce the motivations for the goals and plan, to locate unusual aspects
of the actions involved in the plan, and to deduce the underlying,
deliberately hidden motivations for those aspects.  Thus ANALYST
combines the abilities of a planning/problem-solving agent with significant
further abilities.

Since it is unreasonable to expect that the set of general facts about
peoples' activities along with the specific habits and traits of particular
people of interest to ANALYST can or should be procedurally encoded within
reasoning programs, and since the addition of new information, both by ANALYST
itself and by the human interacting with the program, should be quick and
easy, an Advice-Taker style program, with facts represented in a modular,
formal representation is strongly indicated.

.cb Motivation

In general, we wish to keep our thinking in touch with the prime criterion
of value for ideas in artificial intelligence: their contribution to the
successful performance of problem-solving, question-answering, and
learning programs.

More specifically, we wish to be able to test realistically (and, when
possible, to demonstrate) the problem-solving power of ideas, approaches,
and techniques developed within the AI & Formal Reasoning Group.

Still more specifically, we wish to investigate the problem-solving
power of the new ideas mentioned above concerning

.begin indent 0,3; preface 0; nojust;
a) the representation of facts about knowledge, belief, and desire,

b) the form of various kinds of conjectural reasoning.
.end

A related, but somewhat different, motivation is the desire to determine
where the weakest threads are in the fabric of current AI heuristic
technology -- especially in relation to the solution of problems of the
type characteristically investigated by our group.
 
.cb Design Criteria

.begin indent 0,3;
A.  The problem solver should provide a convenient and effective test bed
and development tool for the research conducted by members of the AI &
Formal Reasoning Group.  Accordingly, it should emphasize (but not
overemphasize) the role of logic in problem solving, and should facilitate
the evaluation and improvement of ideas and techniques developed within
the group.

B.  The design should reflect the present state of the art in problem
solving.  We don't want to re-invent anything, or (still worse) to have
readily avoidable deficiencies in the program.  We want to provide the
best possible problem-solving environment in which to try out our ideas.

C.  The problem solver should be readily revisable, in order to
facilitate and encourage experimentation with a number of different
problem-solving designs.  This ease of revision is essential if the
problem solver is to function over a significant period of time as a truly
useful research tool.  The way to achieve this goal is to design the
program in such a way that those conceptually independent features of it
which might be subject to change (memory retrieval algorithm, problem
solving control structure, etc.) are variable independently of one
another in the program, and are represented by easily variable program
elements such as production rules, parameter tables, etc.  The implication
of this approach is that we should try to design, not a particular problem
solver, but a system or framework for problem solving that can be
"programmed" to instantiate any of a large number of particular problem
solvers.  Concern with this issue need not bog us down in system building,
however; it should be possible to start by writing a particular problem
solver with built-in generalization points.  The important thing is to
keep this design goal in mind from the beginning.
.end

.cb Relation to FOL and EKL

The elementary parts of FOL and EKL, on one hand,
and the proposed new problem solver, on the other, are the
tools of different, complementary approaches to research in artificial
intelligence.  In their simplest form, FOL and EKL may be viewed as the
proof-checking component in McCarthy and Hayes's [1969] "Missouri program"
(MP), and as such are tools of epistemological research; such research may
(and probably should) be carried quite some distance in abstraction from
heuristic issues before attempts are made to implement ideas in a full
problem-solving context.  On the other hand, the proposed new problem
solver is a version of McCarthy and Hayes's "reasoning program" (RP), and,
in the presence of basic proof-checkers, can be reserved for study of the primarily
heuristic aspects of a given class of problems.  It is to be expected that
research economies will be achieved by using the two components thus in
tandem, since it will probably be cheaper to check the epistemological
adequacy of ideas with FOL or EKL than with the problem solver.  Moreover, this
division of labor will permit individual researchers to focus closely on
manageable subparts of a given AI problem.

.cb Basic Approach and Leading Ideas

Our basic approach to construction of an ANALYST will be to use
state-of-the-art AI techniques to implement John McCarthy's "Advice Taker"
program essentially along the lines first laid down in his paper "Programs
with Common Sense" [1959], and extended in [McCarthy and Hayes 1969] and
subsequent papers by McCarthy.  A review of the Advice-Taker design in the
light of currently available AI techniques suggests that many of its basic
concepts are sound and viable, and that prospects for a successful
implementation are good.  More specifically, we intend to incorporate the
following high-level design ideas into our initial implementation:

.begin indent 0,3;
1.  %2A general framework for the structuring, description, and use of
specialized knowledge as the foundation for a meta-general problem solver%1.
Much knowledge properly distributed throughout memory in %2concepts%1 is
required for a system to reason effectively and efficiently about the means of
achieving desired goals, and to analyze intelligently information concerning
the behavior of potential adversaries.

2.  %2Interactive advice seeking and taking as a means of knowledge
acquisition%1.  According to McCarthy's 1959 conception of it, advice taking
is a form of goal-directed information transfer.  The goal is to improve the
performance of the advice taker by giving it information (advice), in a manner
that requires of the advice giver "little if any knowledge of the program or
the previous knowledge of the advice taker".  Under these conditions, it is
unlikely that the advice giver will be very good at guessing what knowledge
the program needs in order to improve its performance in the desired way.
This suggests that an interactive exchange is needed between the advice giver
and taker, in which the giver first sets the improvement goals, and the
program then determines what new knowledge, if any, it needs to be able to
make the specified improvements with a reasonable amount of effort, and asks
for it.  The ability to seek advice is to be based on the program's
meta-knowledge of its own knowledge and capabilities.  The ability to take
advice is to be based on the program's ability to "compile" received advice
into the most appropriate form for its own internal use.  One important
mechanism by which advice can be brought to bear on problem solving is through
the descriptions involved in comparative partial matching (see 5 below).

3.  %2Integration of the frames/units and predicate calculus approaches to
knowledge representation%1.  Though frame- and unit-based styles of
representation are often contrasted with predicate calculus representations,
there is no inherent incompatibility between the two, as has sometimes been
noticed in the AI literature.  What has not been attempted, however, and what
we intend to achieve for the ANALYST, is a thoroughgoing logical
interpretation of every significant aspect of a unit-style representation.
The basic organizing structure of ANALYST's memory will be the %2concept%1,
which is essentially an indexed set of propositions and procedures (or
propositional and procedural schemata), which itself serves as a higher-level
indexing entry for its contents.  In general, concepts have a logical form and
an associated truth-theoretic semantics, and are to be implemented as %2units%1.
Part of the indexing associated with a concept is a set of descriptions and/or
abstractions of its content, which are to serve as "advertisements" 
of the concept for use in partial comparative matching.

4.  %2A "competing considerations" analysis of causal reasoning as the basis
for solution of the frame and qualification problems%1.  Causal reasoning is a
special form of %2commonsense factual reasoning%1, in which the drawing of
conclusions results from the weighing and composition of all reasonably
discoverable relevant considerations (cf. [Creary 1981b]).  Causal
sub-considerations arise from the discovery of general states of affairs
(i.e., patterns of facts) that are known to give rise in a lawlike way to
causal influences (generalized forces) bearing on the truth of a given target
proposition.  A causal conclusion is justified only if a reasonable effort
has been made to discover all states of affairs that are causally relevant
to it ("Is anything wrong with the boat that would tend to prevent its use in
the normal way?").  This approach to the frame and qualification problems
eliminates frame axioms and overly long qualifications of preconditions
in favor of implicit meta-premises that all reasonably discoverable relevant
considerations have been taken into account.

5.  %2Comparative partial matching as the basis for selection of the most
appropriate methods, procedures, abstract plan schemata, etc., during problem
solving%1.  Judgments of appropriateness involve bringing multiple goals and
constraints to bear in a way that permits each single goal or constraint to
%2influence%1 the judgment without completely determining it.  Partial and
comparative matching, where that matching is intelligently managed, can be
used as a vehicle for making such judgments (cf. [Gabriel 1980] and
[Hayes-Roth 1978]).  The essential idea is that by fully and properly
describing a situation, one is able to locate and incorporate techniques and
facts that bear upon several aspects of that situation at the same time;
similarly, multiple motivations can be weighed and explicit tradeoffs
considered.

6.  %2Fregean representation of propositional attitudes, coupled with the
simulative use of the program's own human-like thought mechanisms as a means
of recognizing and analyzing the beliefs, desires, plans, and intentions of
people and other intelligent organisms%1.  Information concerning
propositional attitudes is to be represented declaratively, in an extended
version of McCarthy's first-order theory of individual concepts and
propositions (see [McCarthy 1979] and [Creary 1979]).  This notation provides
a powerful, systematic, and intuitively motivated way of dealing with apparent
failures of extensionality and quantification into intensional constructions.
The sort of information that the framework is meant to handle is illustrated
by the content of this sentence: %2 "Ivanov believes that none of the authors
of the coded message knows that the code has been broken."%* The features of
this sentence that are of special interest here are i) the nesting or
iteration of propositional attitudes ("knows" within the scope of "believes"),
ii) the occurrence of a quantifier ("none") within the scope of a verb of
propositional attitude ("believes"), and iii) the occurrence of definite
descriptions (e.g., "the coded message") within the scope of "believes".

.continue
Instead of redundantly re-representing implicative connections among
propositions by means of declarative axioms at higher levels in the hierarchy
of concepts, such information is simply to be utilized as embedded in the
program's own ordinary inferential apparatus by using this apparatus to
indirectly simulate the reasoning of other intelligent organisms 
[Creary 1979].  This provides a natural and efficient approach to making
realistic inferences about the beliefs of others, and can be generalized to
handle desires, plans, and intentions, as well.  (A sketch of the attitude
notation follows this list.)
.end
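
As a rough sketch of how the Ivanov sentence might look in such an
extended notation (the function names are ours; the actual design will
differ in detail):
.begin nofill

% "Ivanov believes that none of the authors of the coded message
%  knows that the code has been broken":
believes(ivanov,
  Not(Exist(A, And(Author(A, CodedMessage),
                   Know(A, Broken(TheCode))))))
% Capitalized terms denote concepts; the quantifier inside the belief
% ranges over concepts of persons, so no substitution of merely
% co-referring terms is licensed within the attitude.
.end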

The next step in our design process will be to integrate the ideas from
Creary's design studies of the advice taker with the ideas from Gabriel's
thesis, in order to arrive at a detailed initial design for the problem
solver.

.cb Goals and Milestones

The initial problem solving task will be the Airport Problem described in
[McCarthy 1959]. Further experimentation will proceed with embellishments
and complications of that situation.  In particular, further travel planning
techniques will be added to the initial system through the means of advice
taking, and additional problems involving travel will be presented to it.
At this point, the program will be capable of %2simple plan production%1:
the construction of a travel plan, given a goal and suitable travel facts.
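
The flavor of that problem, paraphrasing the formalization of
[McCarthy 1959]:
.begin nofill

% Premises include at(I, desk), at(desk, home), at(car, home),
% at(home, county), at(airport, county), plus transitivity
at(x, y) \land at(y, z) \supset at(x, z)
% and rules stating when walking and driving achieve at-goals.
% Goal: at(I, airport); the expected plan is go(desk, car, walking)
% followed by go(home, airport, driving).
.end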

The above ability will accomplish the goal of establishing an %2advice 
taker%1 methodology, along with representations and procedures for comparative
matching, etc.
The next step will be to expand the capabilities of the problem-solver to
include %2simple plan recognition%1:  the attribution of a plan
to an agent given his partial behavior and his known goals. Thus, from a
set of actions and known goals, we want our system to supply the remaining
actions in the plan.

Next we want to include goal discovery as a further aspect of the 
recognition task.  Further capabilities include discovering
unusual or contrary-to-habit aspects of the plan implementation, which will be
used to reveal hidden goals of the external agent. We call these three
capabilities: %2simple plan recognition, goal-plan recognition, and concealed
goal-plan recognition%1.

.begin indent 0,3;
⊗  Initial design of simple plan producer - January 1982.

⊗  Implementation and preliminary experiments with simple
   plan producer completed; initial design of simple plan
   recognizer begun - April 1982.

⊗  Implementation and preliminary experiments with simple
   plan recognizer underway - June 1982.

⊗  Refinement of simple plan recognizer and design of goal-plan
   recognizer additions begun - September 1982.

⊗  Implementation and preliminary experiments with goal-plan recognizer
   completed - January 1983.

⊗  Refinement of goal-plan recognizer and design of concealed goal-plan
   recognizer additions begun - March 1983.

⊗  Concealed goal-plan recognizer completed - October 1983.
.end
.sss Architecture for Reasoning Programs

Jon Doyle is engaged in the development of a comprehensive architecture
for reasoning programs.  This architecture is based on the use of
.skip 1
.bs
1. multiple logical theories (extending those of Weyhrauch [1980]) and
non-monotonic inference records (as studied in [Doyle 1979] and
[McDermott and Doyle 1980]) for memory representation,

2. explicit statements of mental attitudes and their relations to the
global mental state (extending the proposals of [de Kleer, et al. 1977]),

3. deliberate actions, both mental and extra-mental, based on those
attitudes (extending [McDermott 1978]), and

4. a deliberate, reasoned decision-making process, called reasoned
deliberation, which plans decision-making steps and uses non-monotonic
inference records to conduct arguments about what to do.
.end

See the attached paper [Doyle 1981a] for a more detailed overview of these
techniques.

	The primary reference for this work is [Doyle 1980], but Doyle has
recently completed drafts of two new papers: [Doyle 1981a], submitted to
IJCAI-81, and [Doyle 1981b].  Drafts of three other papers are in preparation
[Doyle 1981c] [Doyle 1981d] [de Kleer and Doyle 1981].

	The goal of this work is the further development of these theories of
memory, reasoning, action, and decision-making, towards the end of eventual
implementation.

	The milestones of this effort are the completions of several papers
in progress explaining in detail the improved theories and their relations
to other AI theories and techniques.  Serious implementation of these ideas
may have to be postponed pending the availability of suitable computational
facilities, although preliminary studies are foreseen.


.sss Formalization of Mathematical Reasoning

	Ketonen has been working on formalization of mathematical 
reasoning. The emphasis has been on creating a system and a language
which would allow the expression and verification of mathematical 
facts in a direct and natural way. The approach taken has been quite
successful: Ketonen has been able to produce an elegant and eminently
readable proof of Ramsey's theorem in under 100 lines.

	Ramsey's theorem has been used as a benchmark to test the power
of proof checking systems: the previously known shortest computer-checked
proof is due to Bulnes, who proved it in about 400 lines.

	The above result is important in that it indicates that
mathematical proofs previously thought to be too complex for 
mechanical verification are now within the reach of sufficiently powerful
proof checking systems. In particular, Ketonen intends to apply the
techniques that he has developed to verifying the correctness of
the various parsing algorithms that are currently in fashion.
These types of results are, of course, of great interest in the field of
compiler design.

	Ketonen's approach is based on simple, yet powerful, proof 
construction primitives that allow several "proof modules" to 
coexist simultaneously in core and permit arbitrary linkages between
such modules. The use of these primitives can be programmed in LISP
if one so desires. 

	One of Ketonen's techniques is a new type of
decision procedure used to verify simple facts in typed predicate calculus.
It was found during Ketonen's work on fragments of predicate calculus 
admitting fast decision procedures. Predicate logic has been traditionally 
viewed as the place to formalize the notion of a "logical inference".
Intuitively it is clear that any "trivial" inference should be easily
formalizable and decidable in predicate logic. As is well known,
the full system of predicate calculus is not decidable; thus one is led
to study suitable fragments of predicate logic.  Ketonen has isolated a 
fragment that seems particularly promising: In some intuitive sense it 
captures the notion of a "direct" inference. The corresponding decision
procedure works in exponential time and is tailored to situations that
occur naturally in actual mathematical practice. 
This work will be described in detail in a forthcoming paper 
by Ketonen and Weyhrauch.  Ketonen intends to continue his work on 
algorithms for typed predicate calculus - partial higher order unification
algorithms seem to be of particular interest in this regard.
Another advantage of cleanly presented "modular" proofs is their
easy availability for computational purposes: one can use the
well-known techniques of proof normalization to extract computational
information from the proofs in question.

	Ketonen intends to apply these
techniques to study variants of Ramsey's theorem and to the question 
of computing feasible bounds for the numbers occurring in Van der Waerden's 
theorem on arithmetic progressions. Once developed, these techniques can be
used to study program and proof transformations in general.

During the next two years Ketonen plans to:
.skip 1
.bs

1. apply the techniques for proof checking that he has
developed to verifying the correctness of the various parsing algorithms
that are currently in fashion.

2. develop more powerful proof manipulation techniques in order
to study transformations of programs and proofs.  This methodology will then be
used to extract computational information out of correctness proofs of
programs and other mathematical facts.

3. continue his study of algorithms for typed predicate calculus
with special emphasis on partial higher order unification algorithms.
.end

.sss Formal Systems for Computation Theories


Carolyn Talcott is 
studying formal systems for representing structures, processes,
and reasoning related to computation.
The goal is to develop a formal system in which the commonly
encountered  concepts and principles can be expressed naturally.   
To begin with, attention will be focused on a LISP-like language.
Formalization of properties of computations and principles for
verification of programs in the full language will demonstrate
that this goal is a reasonable one.

.cb  Applications

Although the initial work will be abstract and technical, there
are many practical applications.  Some of these are summarized
below along with some important extensions.


⊗  Mechanization of the system via formalization of metatheory and model theory.
The intended model of the system will be essentially the data structures
and programs of a LISP-like language.  The structures for representing
the mechanizable aspects of the metatheory can be naturally encoded
in the intended model.

⊗  Proving properties of `real' LISP programs.  We will be able to
treat programs that contain arbitrary assignments and non-local control such
as CATCH and THROW, as well as programs containing MACROs.
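
For example, the following is the kind of `real' program the proposed
system should handle (the function names are ours): FIND-ATOM uses CATCH
and THROW for a non-local exit, and the property to be proved is that it
returns an atom of TREE satisfying PRED if one exists, and NIL otherwise.

	(DEFUN FIND-ATOM (PRED TREE)
	  ;; return some atom of TREE satisfying PRED, else NIL
	  (CATCH 'FOUND (SCAN PRED TREE) NIL))

	(DEFUN SCAN (PRED TREE)
	  ;; exit non-locally as soon as a suitable atom is found
	  (COND ((ATOM TREE)
	         (IF (FUNCALL PRED TREE) (THROW 'FOUND TREE)))
	        (T (SCAN PRED (CAR TREE))
	           (SCAN PRED (CDR TREE)))))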

⊗  Mechanical verification of proofs.  Natural representation of
proofs will facilitate the interaction between the user and the proof
checker.  As in the FOL formalism [Weyhrauch 1977], having models, theories,
and metatheories available will facilitate the development of proof
schemes and their incorporation as derived rules.
For example, the [Boyer, Moore 1979] principles of induction and recursion
can be formulated and used.  Also heuristics for manipulation
of terms, formulae and proofs can be formulated and applied to
make the computer a more `intelligent' assistant.

⊗  Abstract data structures (domains and operations) can be formulated
in a natural manner within the proposed system.  Thus it can be used
as a  framework for understanding data abstraction,
and in particular for analyzing the notion of parameterized data type
which has been the subject of recent investigation from several points
of view.  See for example [Thatcher, Wagner and Wright 1979] and
[Cartwright 1980].


⊗  Formalization of the notion of a computation system = computation
theory + implementation (model).  
Here we move from the static aspects of computation
to the dynamic aspects of interactive systems.
An interactive and dynamically changing system is explained
in terms of a theory, a model, and an interpretation (dynamically)
connecting the two.


⊗  Formalization of the dynamic aspects provides a
framework for talking about activities such as declarations, definitions,
and DEBUGGING, for treating ERROR conditions, and for distinguishing kinds of
undefinedness -- abnormal termination versus non-termination.


.cb What is to be formalized

In order to motivate the criteria given below for a suitable
formal system for theories of computation we describe some of the 
main ideas to be formalized.

1. Levels of computation (a la LISP) and interconnections.

Talcott has been working on methods for formulating
in a uniform and simple manner the computations defined
by various classes of programs.  The `universe' is taken
to be programs in a LISP-like language in which applicative,
imperative, iterative, recursive, and many other styles of
programming are possible.
The idea is to define natural
subclasses and formulate natural embeddings among the classes
where appropriate.  Thus reasoning and specification of
properties can be carried out at an appropriate level of detail
and the results will extend naturally to the more complex
situations.  The results are in general uniformly
parameterized by the computation domain (the collection of 
data and operations taken to be primitive).
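
As a small example of such an embedding (our own illustration), an
iterative LENGTH program maps naturally into the pure recursive class as
a tail recursion with an accumulator, and a property proved for one
transfers to the other:

	(DEFUN LENGTH-LOOP (L)            ; iterative class
	  (DO ((REST L (CDR REST))
	       (N 0 (+ N 1)))
	      ((NULL REST) N)))

	(DEFUN LENGTH-REC (L N)           ; its image: pure recursion
	  (IF (NULL L) N (LENGTH-REC (CDR L) (+ N 1))))
	;; (LENGTH-LOOP L) = (LENGTH-REC L 0) for every list L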

Two notions of computation are distinguished, which are called
respectively `computation over a data structure' and 
`computation over a memory structure'.  
In the former, the primitive operations construct and examine data
(e.g., in pure LISP we have CAR, CDR, CONS, EQUAL, ATOM).  In the latter,
additional destructive primitive operations are allowed (e.g.,
the RPLACs, array assignments, etc.).
The same basic language serves to define either form of computation.
In either case computations defined by natural syntactic subclasses
of the language are considered.  These include pure recursive (no loops
or assignments), iterative (loops with assignment limited to a
local context), mixed iterative and recursive, and the addition of unlimited
assignment and non-local control capabilities.
Computations of either form defined by an abstract machine (a la LAP)
are also treated.
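
The distinction is illustrated by two programs meeting the same
specification (our own example): the first computes over the data
structure, building new cells with CONS; the second computes over the
memory structure, re-using the cells of its argument by RPLACD.

	(DEFUN PURE-REVERSE (L)           ; data structure computation
	  (IF (NULL L) NIL
	      (APPEND (PURE-REVERSE (CDR L)) (LIST (CAR L)))))

	(DEFUN DESTRUCTIVE-REVERSE (L)    ; memory structure computation
	  (DO ((REST L) (PREV NIL) (NEXT NIL))
	      ((NULL REST) PREV)
	    (SETQ NEXT (CDR REST))        ; save the rest of the list
	    (RPLACD REST PREV)            ; reverse one CDR pointer
	    (SETQ PREV REST)
	    (SETQ REST NEXT)))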

2. Verification technology -- partial correctness, total correctness, termination.
Many methodologies for verification of particular classes of programs
have been developed.  Some of the major ones include
invariant assertions, intermittent assertions, and subgoal assertions
for flow-chart type programs; and subgoal induction, Scott (computation)
induction, and McCarthy-Cartwright methods for proving properties
of recursive programs.  The principles can be formulated to apply
to the appropriate subclass of computations from (1) and 
thus be applied and understood in a single context.
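
For instance, the invariant assertion method applies to the iterative
program below (a hypothetical formulation of ours): the stated invariant
holds at every test of the loop, and together with the exit condition it
yields the correctness of the program.

	(DEFUN ITER-REVERSE (L)
	  ;; invariant: (APPEND (REVERSE REST) ACC) = (REVERSE L)
	  ;; at exit REST = NIL, so ACC = (REVERSE L)
	  (DO ((REST L (CDR REST))
	       (ACC NIL (CONS (CAR REST) ACC)))
	      ((NULL REST) ACC)))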

3.  Computations and programs as data.
Here we include such standard ideas as 
analysis of the computation defined by a program and
operations on programs such as compilation and optimization.
In addition there is the notion of derived program developed
by McCarthy.  Derived programs  compute properties of
the original program. Thus various complexity measures
(depth of recursion, number of CONSes, number of subroutine calls)
can be expressed via derived programs.  Also structures representing
traces or complete computation trees can be computed by derived 
programs.  There are general methods for constructing derived programs.
Formalization of these methods and of the computations defined
by programs will provide the means for proving that derived
programs are `correct'.
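
For example, a derived program for APPEND that counts the CONSes
performed might look as follows (our own rendering of the idea): it
mirrors the recursion of the original but computes a property of the
computation rather than its value.

	(DEFUN APPEND2 (X Y)              ; the original program
	  (IF (NULL X) Y
	      (CONS (CAR X) (APPEND2 (CDR X) Y))))

	(DEFUN APPEND2-CONSES (X Y)       ; its derived program
	  ;; counts the CONSes performed by (APPEND2 X Y);
	  ;; Y is carried along to mirror the original's arguments
	  (IF (NULL X) 0
	      (+ 1 (APPEND2-CONSES (CDR X) Y))))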

4. Abstract Data Types.
An important technology for programming is the definition of data types.
This includes definition of the domain and primitive operations.
There are various approaches to specifying the means of definition;
one is the `constructive approach' of [Cartwright 1980].
Constructions should include Cartesian product, combinatory operations,
and restriction to decidable subsets.  In addition, means for
defining functions operating on the defined data types and
principles for proving their properties are essential.
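
For instance, binary trees with atomic leaves can be given a
constructive definition in the spirit of [Cartwright 1980] (the details
below are our own illustration): the domain is the least class
containing the atoms and closed under the constructor, and functions on
the type are proved total by structural induction.

	(DEFUN MK-NODE (LEFT RIGHT) (CONS LEFT RIGHT))  ; constructor
	(DEFUN NODE-P (X) (CONSP X))                    ; recognizer
	(DEFUN LEAF-P (X) (ATOM X))

	(DEFUN LEAF-COUNT (TREE)
	  ;; total by structural induction on the type
	  (IF (LEAF-P TREE) 1
	      (+ (LEAF-COUNT (CAR TREE)) (LEAF-COUNT (CDR TREE)))))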

.cb Suitable formal systems

Talcott has been studying systems developed by logicians
for formalizing various kinds of mathematics including
constructive analysis, constructive algebra, and logic.
Many of the results can easily be modified to apply to the proposed project.
Of particular interest is the work of Feferman [1979, 1981]
on formal systems for constructive and explicit mathematics (analysis).
Below are outlined some of the features essential to
a system in which the ideas sketched above can be formalized naturally.


1. Functions are part of the domain of discourse.  Thus
the notions of functions as arguments and functions as values are 
easily represented.  Classes of functions can be discussed.
Principles for proving properties of special classes of
functions, and procedures for manipulating or constructing
special classes of functions can be formulated.

Among the available means for definition of functions
will be explicit definition (abstraction and application),
iteration (corresponding to loop programs), and recursion.
For major classes of programs, the function computed can
be expressed naturally as a function within the system.
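
A small example of what is wanted (ours; it presumes lexical scoping as
in modern Common Lisp): COMPOSE takes functions as arguments and returns
a function as its value, so composition itself becomes an object the
system can reason about.

	(DEFUN COMPOSE (F G)
	  ;; the composition of two unary functions, itself a function
	  (LAMBDA (X) (FUNCALL F (FUNCALL G X))))

	;; explicit definition by abstraction and application:
	;; (FUNCALL (COMPOSE (LAMBDA (N) (* N N)) #'1+) 3)  =>  16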


2.  A notion of set or class is needed.  These should be
closed under the usual operations (Boolean operations, Cartesian product,
partial function spaces).
Limited principles of inductive definition together with the
corresponding principles for proof by induction are also needed.

3.  The system will be `parameterized' by the intended
computation domain (the primitive data types and operations).
This reflects the principle of separation of control and data
which is important for modularity and stability of a computation
system.




.cb Goals and Milestones

⊗  Specify in detail the ideas to be formalized, giving complete
informal definitions and proofs.

⊗  Define the basic formal system and enrich the collection of
available formal notions via definitions and derived principles so
that the work of formalization can be carried out in a
direct and natural manner.

⊗  Carry out detailed formalization for a variety of examples.

⊗  Use the formalization and properties of the basic formal system
to explain the relations among various verification technologies.

⊗  Prove some non-trivial properties of programs to demonstrate
the expressive power of the formal system.

.SS The Formal Reasoning Group

John McCarthy is Professor of Computer Science and has worked in the area
of formal reasoning applied to computer science and artificial
intelligence since 1957.  He is working on the logic of knowledge and
belief, the circumscription mode of reasoning, formalizing properties of
programs in first order logic and a general theory of patterns.

Lewis Creary is a Research Associate in Computer Science.  He received his
doctorate in philosophy from Princeton University and has been concerned with
epistemology, semantics, logic, and a computational system for inductive
inference.  He is working on epistemological problems of AI, representation of
knowledge, and development of the new advice-taking ANALYST program.

Jon Doyle is a Research Associate in Computer Science.  He received a
doctorate from the Massachusetts Institute of Technology, and has studied
inference records, non-monotonic logics, and architectures for reasoning
programs.  He is developing theories and techniques for memory representation,
mental attitudes, action-taking, decision-making, and learning.

Richard Gabriel is a Research Associate in Computer Science.  He has
a Ph.D. in computer science from Stanford and has been active in the
fields of computer vision, natural language understanding, knowledge
representation, programming environments, programming language development,
natural language generation, and organizations for very large intelligent
programs.  He is working on the design and implementation of advice takers.

Jussi Ketonen is a Research Associate in Computer Science.
He received his Ph.D. in mathematical logic from the University of
Wisconsin.  He is interested in the mechanizable aspects of formal reasoning.


Carolyn Talcott has a Ph.D. in chemistry from UC Berkeley and has
done research in theoretical chemistry and theory of finite groups.
She is currently interested in the formalization 
of concepts and activities relating to computation, and the
eventual application of the results to representation of reasoning
and problem solving activities.

.SS References

.bib

[Aiello 1980]
Aiello, L.,
"Evaluating functions defined in first-order logic" 
(submitted to  Logic Programming Conference,  Budapest, July 1980).

[Aiello and Weyhrauch 1980]
Aiello, L. and Weyhrauch, R. W.,
"Using meta-theoretic Reasoning to do Algebra",
%2Proc. Automatic Deduction Conference%1, 
Les Arcs (France), July 8-11, 1980.

[Boyer, Moore 1979]
Boyer, R.S., and Moore, J.S.,
%2A Computational Logic%1,
Academic Press, New York, 1979.

[Brooks 1980]
Brooks, M.,
"Determining Correctness by Testing", 
Stanford AI Memo AIM-336 (May 1980).

[Bulnes-Rozas 1979]
Bulnes-Rozas, J. B.,
"GOAL: A Goal Oriented Command Language for Interactive Proof Constructions"
Stanford AI Memo AIM-328, (June 1979).

[Cartwright 1980]
Cartwright, R.,
"A Constructive Alternative to Axiomatic Data Type Definitions",
Proceedings LISP Conference, Stanford (1980).

[Cartwright and McCarthy 1979]
Cartwright, R. and McCarthy, J.,
"Recursive Programs as Functions in a First Order Theory",
Stanford AI Memo AIM-324, (March 1979).

[Creary 1979]
Creary, Lewis G., "Propositional Attitudes: Fregean Representation and
Simulative Reasoning," %2Proceedings of the Sixth International Joint
Conference on Artificial Intelligence%1, Tokyo, Japan: August 1979, 176-181.

[Creary 1981a]
Creary, Lewis G.,
"Causal Explanation and the Reality of Natural Component Forces",
forthcoming in %2Pacific Philosophical Quarterly%1 62 (April 1981).

[Creary 1981b]
Creary, Lewis G.,
"On the Epistemology of Commonsense Factual Reasoning:  Toward an Improved
Theoretical Basis for Reasoning Programs", (in final stage of preparation).

[Creary 1981c]
Creary, Lewis G.,
"On the Nature of Causal Reasoning:  The Frame and Qualification Problems",
(in preparation).

[Creary 1981d]
Creary, Lewis G.,
"Neo-Fregean Foundations for Action Theory:  Semantics for AI Representations",
(in preparation).

[de Kleer, Doyle, Steele, and Sussman 1977]
de Kleer, J., Doyle, J., Steele, G., and Sussman, G., 
"Explicit Control of Reasoning",
MIT AI Lab, Memo 427 (1977).

[de Kleer and Doyle 1981]
de Kleer, J., and Doyle, J., 
"Self-interpretation and the Control of Reasoning",
(in preparation).

[Doyle 1979]
Doyle, J.,
"A Truth Maintenance System",
Artificial Intelligence 12, (1979),
pp. 231-272.

[Doyle 1980]
Doyle, J.,
"A Model for Deliberation, Action, and Introspection",
MIT AI Lab, TR-581, (1980).

[Doyle 1981a]
Doyle, J.,
"Sketch of a Model for Deliberation, Action, and Introspection",
(draft, submitted to IJCAI-81).


[Doyle 1981b]
Doyle, J.,
"Three Short Essays on Decisions, Reasons, and Logics",
(draft, Stanford University memo).

[Doyle 1981c]
Doyle, J.,
"A Theory of Memory",
(in preparation).

[Doyle 1981d]
Doyle, J.,
"Steps Towards Artificial Intelligence",
(in preparation).

[Feferman 1979]
Feferman, S.,
"Constructive Theories of Functions and Classes",
in Boffa, M., van Dalen, D., and McAloon, K. (eds.),
%2Logic Colloquium '78%1 (Mons, Belgium, August 1978),
North-Holland (1979),
pp. 159-224.

[Feferman 1981]
Feferman, S.,
"Theories of Variable Finite Type",
(draft).

[Gabriel 1980] 
Richard P. Gabriel, %2An Organization for Programs in Fluid Domains%1,
Ph.D. Dissertation, Computer Science Department, Stanford University,
December 1980.

[Goad 1980a]
Goad, C. A.,
"Proofs as descriptions of computations", 
%2Proc. Automatic Deduction Conference%1, 
Les Arcs (France), July 8-11, 1980.

[Goad 1980b]
Goad, C. A.,
"Computational Uses of the Manipulation of Formal Proofs",
Stanford Computer Science Department Memo, STAN-CS-80-819 (1980).

[Hayes-Roth 1978]
Hayes-Roth, Frederick, "The Role of Partial and Best Matches in 
Knowledge Systems", in %2Pattern-Directed Inference Systems%1,
edited by D.A. Waterman and Frederick Hayes-Roth,
New York:  Academic Press, 1978.

[Ketonen 1981a]
Ketonen, J.,
"Efficient theorem proving in set theory",
(to appear).

[Ketonen 1981b]
Ketonen, J.,
"On a decidable fragment of predicate calculus",
(to appear).

[Ketonen and Weyhrauch 1981]
Ketonen, J., and Weyhrauch, R. W.,
"A semidecision procedure for predicate calculus",
(to appear).

[Manna and Waldinger 1978]
Manna, Z. and Waldinger, R., "Is SOMETIME Sometimes Better than ALWAYS?
Intermittent Assertions in Proving Program Correctness", CACM, 21 (1978),
pp. 159-172.

[McCarthy 1959] 
McCarthy, J., "Programs with Common Sense", 
Proc. Int. Conf. on Mechanisation of Thought Processes, Teddington, England, 
National Physical Laboratory, (1959).

[McCarthy and Hayes 1969]  
McCarthy, J. and Hayes, P., "Some Philosophical Problems from the
Standpoint of Artificial Intelligence", AIM-73, November 1968; also in
D. Michie (ed.), %2Machine Intelligence 4%1, American Elsevier, New York, (1969).

[McCarthy 1977]
McCarthy, J.,
"Epistemological Problems of Artificial Intelligence",
Proc. of the 5th International Joint Conference on Artificial Intelligence,
(1977) (Invited Paper).

[McCarthy 1979] 
McCarthy, J.,
"First Order Theories of Individual Concepts and Propositions",
in Michie, Donald (ed.), %2Machine Intelligence 9%1 (University of
Edinburgh Press, Edinburgh); also Stanford AI Memo AIM-325 (March 1979).


[McCarthy 1980a] 
McCarthy, J.,
"Circumscription - A Form of Non-Monotonic Reasoning", %2Artificial
Intelligence%1, Volume 13, Numbers 1-2 (April 1980);
Stanford AI Memo AIM-334 (February 1980).


[McCarthy 1980b] 
McCarthy, J.,
"The Elephant Language for Representing Programs"
(to appear).

[McCarthy and Talcott 1980]
McCarthy, J., and Talcott, C.,
%2LISP: Programming and Proving%1,
(to appear; available as Stanford CS206 Course Notes, Fall 1980).

[McDermott 1978]
McDermott, D.,
"Planning and Acting",
Cognitive Science 2, (1978), 
pp. 71-109.

[McDermott and Doyle 1980]
McDermott, D., and Doyle, J.,
"Non-monotonic Logic I",
Artificial Intelligence 13, (1980),
pp. 41-72.

[Talcott 1981]
Talcott, C.,
"Reasoning About LISP: a Tutorial in Formalizing Properties of LISP Programs
and Their Proofs."
Stanford AI Memo, forthcoming, (1981).

[Talcott and Weyhrauch 1981]
Talcott, C. and Weyhrauch, R. W.,
"The Blind Robot Puzzle: A Paradigm for Representing and Using Partial Knowldege",
Stanford AI Memo, forthcoming, (1981).


[Thatcher, Wagner and Wright 1979]
Thatcher, J. W., Wagner, E. G. and Wright, J. B.,
"Data Type Specification:
	Parameterization and the Power of Specification Techniques",
IBM Research Report RC-7757 (1979).

[Weyhrauch 1977]
Weyhrauch, R.W.,
"A Users Manual for FOL", Stanford AI Memo AIM-235.1, (July 1977).

[Wilkins 1979]
Wilkins, D. E.,
"Using Patterns and Plans to Solve Problems and Control Search"
Stanford AI Memo AIM-329, (June 1979).

.end